    Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs

    We introduce a dynamic mechanism for the solution of analytically-tractable substructure in probabilistic programs, using conjugate priors and affine transformations to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, this automatically yields improvements such as locally-optimal proposals and Rao-Blackwellization. The mechanism maintains a directed graph alongside the running program that evolves dynamically as operations are triggered upon it. Nodes of the graph represent random variables; edges represent the analytically-tractable relationships between them. Random variables remain in the graph for as long as possible, to be sampled only when they are used by the program in a way that cannot be resolved analytically. In the meantime, they are conditioned on as many observations as possible. We demonstrate the mechanism with a few pedagogical examples, as well as a linear-nonlinear state-space model with simulated data, and an epidemiological model with real data from a dengue outbreak in Micronesia. In all cases one or more variables are automatically marginalized out to significantly reduce variance in estimates of the marginal likelihood, in the final case facilitating a random-weight or pseudo-marginal-type importance sampler for parameter estimation. We have implemented the approach in Anglican and in a new probabilistic programming language called Birch.
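
    To make the delayed-sampling idea concrete, the following minimal Python sketch implements one conjugate relationship, a Normal prior over the mean of a Normal likelihood: the variable stays in closed form, absorbs observations analytically (the Rao-Blackwellized step, which also yields the marginal likelihood), and is sampled only when the program forces a concrete value. The class and method names are illustrative assumptions, not the Anglican or Birch implementation, and the full mechanism tracks a whole directed graph of such relationships rather than a single edge.

    import math
    import random

    class DelayedNormal:
        """A Normal random variable kept in closed form until a value is needed.

        Illustrative sketch only: one node of the paper's graph, with a
        single conjugate (Normal-Normal) relationship to its observations.
        """

        def __init__(self, mean, var):
            self.mean, self.var = mean, var
            self.value = None  # still symbolic; not yet sampled

        def observe_child(self, y, obs_var):
            """Condition on y ~ Normal(self, obs_var) by conjugacy.

            Updates the posterior in place and returns log p(y), this
            observation's contribution to the marginal likelihood.
            """
            marg_var = self.var + obs_var
            log_ml = -0.5 * (math.log(2 * math.pi * marg_var)
                             + (y - self.mean) ** 2 / marg_var)
            gain = self.var / marg_var
            self.mean += gain * (y - self.mean)
            self.var *= 1 - gain
            return log_ml

        def realize(self):
            """Sample only when the program needs a concrete value."""
            if self.value is None:
                self.value = random.gauss(self.mean, math.sqrt(self.var))
            return self.value

    # Two observations are absorbed analytically before mu is ever sampled.
    mu = DelayedNormal(0.0, 1.0)
    log_z = mu.observe_child(0.9, 0.5) + mu.observe_child(1.1, 0.5)
    print(log_z, mu.realize())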

    Implementation processes in a cognitive rehabilitation intervention for people with dementia: a complexity-informed qualitative analysis

    Objectives: Healthcare is often delivered through complex interventions. Understanding how to implement these successfully is important for optimising services. This article demonstrates how the complexity theory concept of ‘self-organisation’ can inform implementation, drawing on a process evaluation within a randomised controlled trial of the GREAT (Goal-oriented cognitive Rehabilitation in Early-stage Alzheimer’s and related dementias: a multi-centre single-blind randomised controlled Trial) intervention, which compared a cognitive rehabilitation intervention for people with dementia with usual treatment. Design: A process evaluation examined experiences of GREAT therapists and participants receiving the intervention, through thematic analysis of a focus group with therapists and interviews with participants and their carers. Therapy records of participants receiving the intervention were also analysed using adapted framework analysis. Analysis adopted a critical realist perspective and a deductive-inductive approach to identify patterns in how the intervention operated. Setting: The GREAT intervention was delivered through home visits by therapists, in eight regions in the UK. Participants: Six therapists took part in a focus group, interviews were conducted with 25 participants and 26 carers, and therapy logs for 50 participants were analysed. Intervention: A 16-week cognitive rehabilitation programme for people with mild-to-moderate dementia. Results: ‘Self-organisation’ of the intervention occurred through adaptations made by therapists. Adaptations included simplifying the intervention for people with greater cognitive impairment, and extending it to meet additional needs. Relational work by therapists produced an emergent outcome of ‘social support’. Self-organised aspects of the intervention were less visible than formal components, but were important aspects of how it operated during the trial. This understanding can help to inform future implementation. Conclusions: Researchers are increasingly adopting complexity theory to understand interventions. This study extends the application of complexity theory by demonstrating how ‘self-organisation’ was a useful concept for understanding aspects of the intervention that would have been missed by focusing on formal intervention components. Analysis of self-organisation could enhance future process evaluations and implementation studies. Trial registration number: ISRCTN21027481.

    Optimizing insect metabarcoding using replicated mock communities

    1. Metabarcoding (high-throughput sequencing of marker gene amplicons) has emerged as a promising and cost-effective method for characterizing insect community samples. Yet, the methodology varies greatly among studies and its performance has not been systematically evaluated to date. In particular, it is unclear how accurately metabarcoding can resolve species communities in terms of presence-absence, abundance and biomass.
    2. Here we use mock community experiments and a simple probabilistic model to evaluate the effect of different DNA extraction protocols on metabarcoding performance. Specifically, we ask four questions: (Q1) How consistent are the recovered community profiles across replicate mock communities? (Q2) How does the choice of lysis buffer affect the recovery of the original community? (Q3) How are community estimates affected by differing lysis times and homogenization? (Q4) Is it possible to obtain adequate species abundance estimates through the use of biological spike-ins?
    3. We show that estimates are quite variable across community replicates. In general, a mild lysis protocol is better at reconstructing species lists and approximate counts, while homogenization is better at retrieving biomass composition. Small insects are more likely to be detected in lysates, while some tough species require homogenization to be detected. Results are less consistent across biological replicates for lysates than for homogenates. Some species are associated with strong PCR amplification bias, which complicates the reconstruction of species counts. Yet, with adequate spike-in data, species abundance can be determined with roughly 40% standard error for homogenates, and with roughly 50% standard error for lysates, under ideal conditions. In the latter case, however, this often requires species-specific reference data, while spike-in data generalize better across species for homogenates.
    4. We conclude that a nondestructive, mild lysis approach shows the highest promise for the presence/absence description of the community, while also allowing future morphological or molecular work on the material. However, homogenization protocols perform better for characterizing community composition, in particular in terms of biomass.
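
    As a concrete reading of point 4, a biological spike-in of known abundance calibrates read counts into abundance estimates. The sketch below assumes, purely for illustration, a linear read-to-abundance relationship and ignores the species-specific PCR amplification bias the study highlights; the function and variable names are invented for this example and are not the authors' model.

    def abundances_from_spikein(reads, spike_reads, spike_abundance):
        """Scale per-species read counts by a spike-in of known abundance.

        Assumes (illustratively) that reads grow linearly with abundance, so
        abundance_i ~= reads_i * (spike_abundance / spike_reads).
        """
        scale = spike_abundance / spike_reads
        return {species: n * scale for species, n in reads.items()}

    # Example: 10 spike-in individuals produced 2,000 reads in this sample.
    counts = {"Aedes_sp": 5400, "Culex_sp": 860}
    print(abundances_from_spikein(counts, spike_reads=2000, spike_abundance=10))
    # {'Aedes_sp': 27.0, 'Culex_sp': 4.3}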

    Goal-oriented cognitive rehabilitation in early-stage dementia: study protocol for a multi-centre single-blind randomised controlled trial (GREAT).

    Background: Preliminary evidence suggests that goal-oriented cognitive rehabilitation (CR) may be a clinically effective intervention for people with early-stage Alzheimer's disease, vascular or mixed dementia, and their carers. This study aims to establish whether CR is a clinically effective and cost-effective intervention for people with early-stage dementia and their carers. Methods/design: In this multi-centre, single-blind randomised controlled trial, 480 people with early-stage dementia, each with a carer, will be randomised to receive either treatment as usual or cognitive rehabilitation (10 therapy sessions over 3 months, followed by 4 maintenance sessions over 6 months). We will compare the effectiveness of cognitive rehabilitation with that of treatment as usual with regard to improving self-reported and carer-rated goal performance in areas identified as causing concern by people with early-stage dementia; improving quality of life, self-efficacy, mood and cognition of people with early-stage dementia; and reducing stress levels and ameliorating quality of life for carers of participants with early-stage dementia. The incremental cost-effectiveness of goal-oriented cognitive rehabilitation compared to treatment as usual will also be examined. Discussion: If the study confirms the benefits and cost-effectiveness of cognitive rehabilitation, it will be important to examine how the goal-oriented cognitive rehabilitation approach can most effectively be integrated into routine health-care provision. Our aim is to provide training and develop materials to support the implementation of this approach following trial completion. Trial registration: Current Controlled Trials ISRCTN21027481.

    Individual goal-oriented cognitive rehabilitation to improve everyday functioning for people with early-stage dementia: a multi-centre randomised controlled trial (the GREAT trial)

    Objectives: To determine whether individual goal-oriented cognitive rehabilitation (CR) improves everyday functioning for people with mild-to-moderate dementia. Design and methods: Parallel-group multi-centre single-blind randomised controlled trial (RCT) comparing CR added to usual treatment (CR) with usual treatment alone (TAU) for people with an ICD-10 diagnosis of Alzheimer’s, vascular or mixed dementia and mild-to-moderate cognitive impairment (MMSE score ≥ 18), and with a family member willing to contribute. Participants allocated to CR received ten weekly sessions over three months and four maintenance sessions over six months. Participants were followed up three and nine months post-randomisation by blinded researchers. The primary outcome was self-reported goal attainment at three months. Secondary outcomes at three and nine months included informant-reported goal attainment, quality of life, mood, self-efficacy, and cognition, and study partner stress and quality of life. Results: We randomised (1:1) 475 people with dementia; 445 (CR=281) were included in the intention-to-treat analysis at three months, and 426 (CR=208) at nine months. At three months there were statistically significant large positive effects for participant-rated goal attainment (d=0.97, 95% CI 0.75 to 1.19), corroborated by informant ratings (d=1.11, 0.89 to 1.34). These effects were maintained at nine months for both participant (d=0.94, 0.71 to 1.17) and informant ratings (d=0.96, 0.73 to 1.2). The observed gains related to goals directly targeted in the therapy. There were no significant differences in secondary outcomes. Conclusions: Cognitive rehabilitation enables people with early-stage dementia to improve their everyday functioning in relation to individual goals targeted in the therapy. Funding: National Institute for Health Research, Health Technology Assessment Programme, Grant/Award Number: 11/15/0.

    Probabilistic Programming for Birth-Death Models of Evolution

    Phylogenetic birth-death models constitute a family of generative models of evolution. In these models an evolutionary process starts with a single species at a certain time in the past, and the speciations (the splitting of one species into two descendant species) and extinctions are modeled as events of non-homogeneous Poisson processes. Different birth-death models admit different types of changes to the speciation and extinction rates. The result of an evolutionary process is a binary tree called a phylogenetic tree, or phylogeny, with the root representing the single species at the origin, the internal nodes representing speciation events, and the leaves representing either currently living (extant) species in the present or extinction events in the past. Usually only a part of this tree, corresponding to the evolution of the extant species and their ancestors, is known, reconstructed from, e.g., the genomic sequences of these extant species. The task of interest is to estimate the parameters of birth-death models given this reconstructed tree as the observation. While encoding generative birth-death models as computer programs is easy and straightforward, developing and implementing bespoke inference algorithms is not. This complicates prototyping, development, and deployment of new birth-death models. Probabilistic programming is a new approach in which generative models are encoded as computer programs in languages that include support for random variables, conditioning on observed data, and automatic inference. This thesis is based on a collection of papers in which we demonstrate how to use probabilistic programming to solve the above-mentioned task of parameter inference in birth-death models. We show how these models can be implemented as simple programs in probabilistic programming languages. Our contribution also includes general improvements to the automatic inference methods.
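
    For concreteness, the simplest member of this family, the constant-rate birth-death process, is easy to write as a generative program. The Python sketch below simulates such a tree forward from a single species; the tuple encoding, parameter values, and function name are illustrative choices rather than code from the included papers.

    import random

    def simulate_bd(lam, mu, t_max, t=0.0):
        """Simulate a constant-rate birth-death tree forward in time.

        lam and mu are the (here constant) rates of the speciation and
        extinction Poisson processes; richer birth-death models let them
        vary. Returns ('sp', t, left, right) for a speciation event,
        ('ext', t) for an extinction, and ('leaf', t_max) for an extant
        species surviving to the present.
        """
        wait = random.expovariate(lam + mu)      # time to the next event
        if t + wait >= t_max:
            return ('leaf', t_max)               # lineage reaches the present
        t += wait
        if random.random() < lam / (lam + mu):   # speciation: two daughters
            return ('sp', t,
                    simulate_bd(lam, mu, t_max, t),
                    simulate_bd(lam, mu, t_max, t))
        return ('ext', t)                        # extinction

    random.seed(1)
    tree = simulate_bd(lam=1.0, mu=0.5, t_max=3.0)

    Parameter inference then runs such a generative program in reverse: the program is conditioned on the reconstructed tree of extant lineages, and the language's automatic inference machinery estimates the rates.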

    Suspension Analysis and Selective Continuation-Passing Style for Higher-Order Probabilistic Programming Languages

    Probabilistic programming languages (PPLs) make encoding and automatically solving statistical inference problems relatively easy by separating models from the inference algorithm. A popular choice for solving inference problems is to use Monte Carlo inference algorithms. For higher-order functional PPLs, these inference algorithms rely on execution suspension to perform inference, most often enabled through a full continuation-passing style (CPS) transformation. However, standard CPS transformations for PPL compilers introduce significant overhead, a problem the community has generally overlooked. State-of-the-art solutions either perform complete CPS transformations with performance penalties due to unnecessary closure allocations or use efficient, but complex, low-level solutions that are often not available in high-level languages. In contrast to prior work, we develop a new approach that is both efficient and easy to implement using higher-order languages. Specifically, we design a novel static suspension analysis technique that determines the parts of a program that require suspension, given a particular inference algorithm. The analysis result allows selectively CPS transforming the program only where necessary. We formally prove the correctness of the suspension analysis and implement both the suspension analysis and selective CPS transformation in the Miking CorePPL compiler. We evaluate the implementation for a large number of Monte Carlo inference algorithms on real-world models from phylogenetics, epidemiology, and topic modeling. The evaluation results demonstrate significant improvements across all models and inference algorithms.
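
    To illustrate what suspension buys, the following minimal Python sketch rewrites only the observe point into an explicit continuation while the surrounding code stays in direct style, mimicking the effect of selective CPS at a single suspension point. It is a toy stand-in under that assumption, not the Miking CorePPL transformation, and all names are invented for the example.

    import math
    import random

    def normal_logpdf(x, mean, sd):
        return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mean) ** 2 / (2 * sd * sd)

    def model():
        """Run until the observe point, then suspend.

        Returns (log_weight, continuation): only this one suspension
        point is CPS-transformed; the rest stays in direct style.
        """
        x = random.gauss(0.0, 1.0)           # latent variable, direct style
        def rest():                          # continuation: rest of the program
            return x
        # observe 1.2 ~ Normal(x, 0.5): suspend so the driver can resample
        return normal_logpdf(1.2, x, 0.5), rest

    def smc(model, n=10000):
        """Run n particles to their suspension, resample, then resume."""
        suspended = [model() for _ in range(n)]
        weights = [math.exp(lw) for lw, _ in suspended]
        chosen = random.choices(suspended, weights=weights, k=n)  # resample
        return [k() for _, k in chosen]                           # resume

    random.seed(0)
    print(sum(smc(model)) / 10000)  # approximately 0.96, the posterior mean of x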